CoIn-SafeLink: Safety-critical Control With Cost-sensitive Incremental Random Vector Functional Link Network

Hu, Songqiao, Liu, Zeyi, He, Xiao, Shen, Zhen

arXiv.org Artificial Intelligence

Control barrier functions (CBFs) play a crucial role in the theoretical foundations of safety-critical control for robotic systems. However, most existing methods rely on analytical expressions of the unsafe state regions, which are often unavailable for irregular and dynamic unsafe regions. In this paper, a novel CBF construction approach, called CoIn-SafeLink, is proposed based on cost-sensitive incremental random vector functional-link (RVFL) neural networks. By designing an appropriate cost function, CoIn-SafeLink assigns differentiated sensitivities to safe and unsafe samples, achieving zero false-negative risk in unsafe sample classification. Additionally, an incremental update theorem for CoIn-SafeLink is proposed, enabling precise adjustments in response to changes in the unsafe region. Finally, an analytical expression for the gradient of CoIn-SafeLink is provided to calculate the control input. The proposed method is validated on a 3-degree-of-freedom drone attitude control system. Experimental results demonstrate that the method can effectively learn the unsafe region boundaries and rapidly adapt as these regions evolve, with an update speed approximately five times faster than that of the comparison methods. The source code is available at https://github.com/songqiaohu/CoIn-SafeLink.
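The filtering step that a learned barrier's gradient feeds into is the standard CBF condition ḣ(x) ≥ -αh(x). As a minimal sketch, assuming hypothetical single-integrator dynamics ẋ = u and a hand-written barrier h(x) = 1 - x² (in place of the paper's learned RVFL barrier), the filter is a one-constraint QP with a closed-form solution:

```python
def h(x):            # barrier: safe set is |x| <= 1
    return 1.0 - x * x

def grad_h(x):
    return -2.0 * x

def safety_filter(x, u_nom, alpha=1.0):
    """Minimally modify u_nom so that dh/dt >= -alpha * h(x)
    holds for the single-integrator dynamics x_dot = u."""
    g = grad_h(x)
    bound = -alpha * h(x)
    if g * u_nom >= bound:        # nominal input already satisfies the CBF condition
        return u_nom
    # closed-form solution of the one-constraint QP: project onto the boundary
    return bound / g

# near the boundary at x = 0.9, a nominal push outward gets attenuated
u_safe = safety_filter(0.9, u_nom=2.0)
```

In the actual method, `h` and `grad_h` would come from the trained network and its analytical gradient expression; the projection step is unchanged.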


Neural Lyapunov Function Approximation with Self-Supervised Reinforcement Learning

McCutcheon, Luc, Gharesifard, Bahman, Fallah, Saber

arXiv.org Artificial Intelligence

Control Lyapunov functions are traditionally used to design a controller which ensures convergence to a desired state, yet deriving these functions for nonlinear systems remains a complex challenge. This paper presents a novel, sample-efficient method for neural approximation of nonlinear Lyapunov functions, leveraging self-supervised Reinforcement Learning (RL) to enhance training data generation, particularly for inaccurately represented regions of the state space. The proposed approach employs a data-driven World Model to train Lyapunov functions from off-policy trajectories. The method is validated on both standard and goal-conditioned robotic tasks, demonstrating faster convergence and higher approximation accuracy compared to the state-of-the-art neural Lyapunov approximation baseline. The code is available at: https://github.com/CAV-Research-Lab/SACLA.git
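A learned Lyapunov candidate is typically trained against sampled violations of the Lyapunov conditions V(x) > 0 and V̇(x) < 0 along the dynamics. The sketch below, assuming a toy stable system ẋ = -x and the known candidate V(x) = x² (not the paper's neural approximator or its RL-driven sampler), shows the violation count such a training loss is built from:

```python
def V(x):
    return x * x

def dV(x):           # analytic gradient of V
    return 2.0 * x

def f(x):            # toy stable dynamics: x_dot = -x
    return -x

def lyapunov_violations(samples, margin=0.0):
    """Count sample points violating V(x) > 0 or Vdot(x) = dV(x)*f(x) < -margin.
    Training a neural V typically minimizes a loss built from these violations."""
    bad = 0
    for x in samples:
        if x != 0.0 and V(x) <= 0.0:
            bad += 1
        if x != 0.0 and dV(x) * f(x) >= -margin:
            bad += 1
    return bad

violations = lyapunov_violations([-1.0, -0.5, 0.25, 2.0])   # 0 for this V and f
```

The paper's contribution is in *where* the samples come from (a world model generating off-policy trajectories in poorly approximated regions), not in this check itself.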


Nonlinear Modeling and Observability of a Planar Multi-Link Robot with Link Thrusters

Andrews, Nicholas B., Morgansen, Kristi A.

arXiv.org Artificial Intelligence

This work is motivated by the development of cooperative teams of small, soft underwater robots designed to accomplish complex tasks through collective behavior. These robots take inspiration from biology: salps are gelatinous, jellyfish-like marine animals that utilize jet propulsion for maneuvering and can physically connect to form dynamic chains of arbitrary shape and size. The primary contributions of this research are twofold: first, we adapt a planar nonlinear multi-link snake robot model to a planar multi-link salp-inspired system by removing joint actuators, introducing link thrusters, and allowing for non-uniform link lengths, masses, and moments of inertia. Second, we conduct a nonlinear observability analysis of the multi-link system with link thrusters, showing that the link angles, angular velocities, masses, and moments of inertia are locally observable when equipped with inertial measurement units and operating under specific thruster conditions. This research provides a theoretical foundation for modeling and estimating both the state and intrinsic parameters of a multi-link system with link thrusters, which are essential for effective controller design and performance.
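A nonlinear observability analysis stacks Lie derivatives of the output map and checks the rank of their Jacobians. As a hedged illustration on a hypothetical double-integrator linearization (far simpler than the multi-link salp model), the zeroth and first Lie derivatives reduce to C and CA, and the rank test becomes a determinant check:

```python
def lie_observability_matrix(A, C):
    """For a linear system x_dot = A*x, y = C*x (e.g. the local linearization
    of a nonlinear model), the Jacobians of the zeroth and first Lie
    derivatives of the output are C and C*A; stacking them gives the
    observability matrix."""
    CA = [sum(C[0][k] * A[k][j] for k in range(2)) for j in range(2)]
    return [list(C[0]), CA]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# double integrator: state (position, velocity)
A = [[0.0, 1.0], [0.0, 0.0]]
pos_obs = det2(lie_observability_matrix(A, [[1.0, 0.0]])) != 0.0  # position sensor
vel_obs = det2(lie_observability_matrix(A, [[0.0, 1.0]])) != 0.0  # velocity sensor
```

A position sensor renders the double integrator observable; a velocity-only sensor does not, since position never enters the output or its derivatives. The paper's analysis plays the same game with higher-order Lie derivatives of the full nonlinear salp dynamics.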


Relative Pose Observability Analysis Using Dual Quaternions

Andrews, Nicholas B., Morgansen, Kristi A.

arXiv.org Artificial Intelligence

Relative pose (position and orientation) estimation is an essential component of many robotics applications. Fiducial markers, such as the AprilTag visual fiducial system, yield a relative pose measurement from a single marker detection and provide a powerful tool for pose estimation. In this paper, we perform a Lie algebraic nonlinear observability analysis on a nonlinear dual quaternion system that is composed of a relative pose measurement model and a relative motion model. We prove that many common dual quaternion expressions yield Jacobian matrices with advantageous block structures and rank properties that are beneficial for analysis. We show that using a dual quaternion representation yields an observability matrix with a simple block triangular structure and satisfies the necessary full rank condition.
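Dual quaternions encode a pose as q_r + ε q_d with q_d = ½ t ⊗ q_r, and pose composition is dual quaternion multiplication. A minimal sketch, restricted to pure translations with hand-rolled Hamilton products (independent of the paper's measurement and motion models):

```python
def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def dq_mul(a, b):
    """Dual quaternion product: (ar + e*ad)(br + e*bd) = ar*br + e(ar*bd + ad*br)."""
    ar, ad = a
    br, bd = b
    dual = tuple(x + y for x, y in zip(qmul(ar, bd), qmul(ad, br)))
    return (qmul(ar, br), dual)

def dq_from_translation(t):
    """Pure translation t: real part is the identity, dual part is (0, t/2)."""
    return ((1.0, 0.0, 0.0, 0.0), (0.0, t[0] / 2, t[1] / 2, t[2] / 2))

def translation_of(dq):
    """Recover t = 2 * q_d * conj(q_r)."""
    qr, qd = dq
    conj = (qr[0], -qr[1], -qr[2], -qr[3])
    _, x, y, z = qmul(qd, conj)
    return (2 * x, 2 * y, 2 * z)

ab = dq_mul(dq_from_translation((1.0, 0.0, 0.0)),
            dq_from_translation((0.0, 2.0, 0.0)))
t = translation_of(ab)   # translations compose: (1.0, 2.0, 0.0)
```

The block structure of the dual quaternion product (real part independent of the dual parts) is one source of the advantageous block-triangular Jacobians the paper exploits.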


Improving Equivariant Model Training via Constraint Relaxation

Pertigkiozoglou, Stefanos, Chatzipantazis, Evangelos, Trivedi, Shubhendu, Daniilidis, Kostas

arXiv.org Artificial Intelligence

Equivariant neural networks have been widely used in a variety of applications due to their ability to generalize well in tasks where the underlying data symmetries are known. Despite their successes, such networks can be difficult to optimize and require careful hyperparameter tuning to train successfully. In this work, we propose a novel framework for improving the optimization of such models by relaxing the hard equivariance constraint during training: We relax the equivariance constraint of the network's intermediate layers by introducing an additional non-equivariance term that we progressively constrain until we arrive at an equivariant solution. By controlling the magnitude of the activation of the additional relaxation term, we allow the model to optimize over a larger hypothesis space containing approximate equivariant networks and converge back to an equivariant solution at the end of training. We provide experimental results on different state-of-the-art network architectures, demonstrating how this training framework can result in equivariant models with improved generalization performance.
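The relaxation idea can be illustrated on a toy symmetry: for the sign-flip group g(x) = -x, equivariance of a scalar map means f(-x) = -f(x). The sketch below uses a hypothetical linear penalty schedule (the paper's actual mechanism acts on the magnitude of a non-equivariant term in the intermediate layers), ramping an equivariance-gap penalty from loose to tight over training:

```python
def equivariance_gap(f, xs):
    """For the sign-flip group g(x) = -x, an equivariant map satisfies
    f(-x) = -f(x); the gap measures the violation on sample points."""
    return sum((f(-x) + f(x)) ** 2 for x in xs)

def relaxed_loss(f, xs, task_loss, step, total_steps):
    # the penalty weight ramps up, so the model starts in the larger
    # hypothesis space of approximately equivariant maps and is driven
    # back to an equivariant solution by the end of training
    lam = step / total_steps
    return task_loss + lam * equivariance_gap(f, xs)

f = lambda x: 2.0 * x + 0.5   # non-equivariant: the bias term breaks the symmetry
early = relaxed_loss(f, [1.0, 2.0], task_loss=1.0, step=1, total_steps=100)
late = relaxed_loss(f, [1.0, 2.0], task_loss=1.0, step=100, total_steps=100)
```

Early in training the symmetry violation is barely penalized; by the final step the same violation dominates the loss, pushing the optimizer toward the equivariant subspace (here, bias zero).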


Rademacher Complexity of Neural ODEs via Chen-Fliess Series

Hanson, Joshua, Raginsky, Maxim

arXiv.org Artificial Intelligence

Several recent works have examined continuous-depth idealizations of deep neural nets, viewing them as continuous-time ordinary differential equation (ODE) models with either fixed or time-varying parameters. Traditional discrete-layer nets can be recovered by applying an appropriate temporal discretization scheme, e.g., the Euler or Runge-Kutta methods. In applications, this perspective has resulted in advantages concerning regularization (Kelly et al., 2020; Kobyzev et al., 2021; Pal et al., 2021), efficient parameterization (Queiruga et al., 2020), convergence speed (Chen et al., 2023), applicability to non-uniform data (Sahin and Kozat, 2019), among others. As a theoretical tool, continuous-depth idealizations have led to better understanding of the contribution of depth to model expressiveness and generalizability (Marion, 2023; Massaroli et al., 2020), new or improved training strategies via framing as an optimal control problem (Corbett and Kangin, 2022), and novel model variations (Jia and Benson, 2019; Peluchetti and Favaro, 2020). Considered as generic control systems, continuous-depth nets can admit a number of distinct input-output configurations depending on how the control system "anatomy" is delegated. Controlled neural ODEs (Kidger et al., 2020) and continuous-time recurrent neural nets (Fermanian et al., 2021) treat the (time-varying) control signal as the input to the model; the initial condition is either fixed or treated as a trainable parameter; the (time-varying) output signal is the model output; and any free parameters of the vector fields (weights) are held constant in time.
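The recovery of a discrete residual net from the continuous-time view is the forward Euler scheme x_{k+1} = x_k + Δt·f(x_k). A minimal sketch with a hypothetical scalar tanh vector field standing in for a trained layer:

```python
import math

def f(x, theta):
    """Toy vector field: a tanh nonlinearity, the continuous-depth analogue
    of one residual block's transformation."""
    return math.tanh(theta * x)

def euler_forward(x0, theta, depth, dt):
    """Euler discretization of x_dot = f(x, theta) recovers a residual net:
    each layer computes x_{k+1} = x_k + dt * f(x_k), with depth layers
    covering total integration time depth * dt."""
    x = x0
    for _ in range(depth):
        x = x + dt * f(x, theta)
    return x

# with a contracting field (theta < 0) the deep limit flows to the equilibrium
deep = euler_forward(1.0, theta=-2.0, depth=100, dt=0.05)
```

Swapping the loop body for a Runge-Kutta update changes the discretization scheme but not the continuous-depth model being approximated, which is the sense in which layer count and step size decouple in this literature.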


Optimized Control Invariance Conditions for Uncertain Input-Constrained Nonlinear Control Systems

Brunke, Lukas, Zhou, Siqi, Che, Mingxuan, Schoellig, Angela P.

arXiv.org Artificial Intelligence

Providing safety guarantees for learning-based controllers is important for real-world applications. One approach to realizing safety for arbitrary control policies is safety filtering. If necessary, the filter modifies control inputs to ensure that the trajectories of a closed-loop system stay within a given state constraint set for all future time, referred to as the set being positive invariant or the system being safe. Under the assumption of fully known dynamics, safety can be certified using control barrier functions (CBFs). However, the dynamics model is often either unknown or only partially known in practice. Learning-based methods have been proposed to approximate the CBF condition for unknown or uncertain systems from data; however, these techniques do not account for input constraints and, as a result, may not yield a valid CBF condition to render the safe set invariant. In this work, we study conditions that guarantee control invariance of the system under input constraints and propose an optimization problem to reduce the conservativeness of CBF-based safety filters. Building on these theoretical insights, we further develop a probabilistic learning approach that allows us to build a safety filter that guarantees safety for uncertain, input-constrained systems with high probability. We demonstrate the efficacy of our proposed approach in simulation and real-world experiments on a quadrotor and show that we can achieve safe closed-loop behavior for a learned system while satisfying state and input constraints.
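The interaction between the CBF condition and input constraints shows up already in a scalar toy problem: intersecting the CBF half-space with the input box [u_min, u_max] can leave an empty feasible set, which is exactly what a control-invariance condition must rule out. A sketch assuming hypothetical single-integrator dynamics ẋ = u and barrier h(x) = 1 - x² (not the paper's learned, uncertain model):

```python
def filter_with_input_limits(x, u_nom, u_min, u_max, alpha=1.0):
    """Safety filter for x_dot = u with barrier h(x) = 1 - x^2 under input
    constraints. The CBF condition -2*x*u >= -alpha*h(x) intersected with
    [u_min, u_max] may be empty; returns None in that case."""
    h = 1.0 - x * x
    g = -2.0 * x
    # feasible interval for u from the CBF condition g*u >= -alpha*h
    if g > 0:
        lo, hi = max(u_min, -alpha * h / g), u_max
    elif g < 0:
        lo, hi = u_min, min(u_max, -alpha * h / g)
    else:
        lo, hi = (u_min, u_max) if -alpha * h <= 0 else (1.0, 0.0)
    if lo > hi:
        return None                    # no safe input: the set is not control invariant
    return min(max(u_nom, lo), hi)     # closest admissible input to u_nom

u = filter_with_input_limits(0.9, u_nom=2.0, u_min=-0.05, u_max=0.05)
```

With tight actuator limits the filter can still certify safety here, but shifting the input box (e.g. `u_min=0.2, u_max=0.3`) makes the intersection empty; the paper's conditions characterize when such infeasibility cannot occur, and its learning approach carries that guarantee over to uncertain dynamics with high probability.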


Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs

Wise, Emmett, Cheng, Qilong, Kelly, Jonathan

arXiv.org Artificial Intelligence

Autonomous vehicles (AVs) fuse data from multiple sensors and sensing modalities to impart a measure of robustness when operating in adverse conditions. Radars and cameras are popular choices for use in sensor fusion; although radar measurements are sparse in comparison to camera images, radar scans penetrate fog, rain, and snow. However, accurate sensor fusion depends upon knowledge of the spatial transform between the sensors and any temporal misalignment that exists in their measurement times. During the life cycle of an AV, these calibration parameters may change, so the ability to perform in-situ spatiotemporal calibration is essential to ensure reliable long-term operation. State-of-the-art 3D radar-camera spatiotemporal calibration algorithms require bespoke calibration targets that are not readily available in the field. In this paper, we describe an algorithm for targetless spatiotemporal calibration that does not require specialized infrastructure. Our approach leverages the ability of the radar unit to measure its own ego-velocity relative to a fixed, external reference frame. We analyze the identifiability of the spatiotemporal calibration problem and determine the motions necessary for calibration. Through a series of simulation studies, we characterize the sensitivity of our algorithm to measurement noise. Finally, we demonstrate accurate calibration for three real-world systems, including a handheld sensor rig and a vehicle-mounted sensor array. Our results show that we are able to match the performance of an existing, target-based method, while calibrating in arbitrary, infrastructure-free environments.
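The role of radar ego-velocity in temporal calibration can be sketched as a toy signal-alignment step: slide the radar speed profile against the camera-derived one and pick the shift that matches best. This is only a stand-in for the paper's joint spatiotemporal estimator, using hypothetical sample data:

```python
def best_time_offset(radar_speed, cam_speed, max_shift):
    """Estimate temporal misalignment (in samples) by sliding one speed
    signal over the other and picking the shift with minimum mean squared
    error -- a toy stand-in for joint spatiotemporal calibration."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(radar_speed[i], cam_speed[i + s])
                 for i in range(len(radar_speed))
                 if 0 <= i + s < len(cam_speed)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

cam = [0.0, 0.2, 0.5, 1.0, 0.7, 0.3, 0.1, 0.0]
radar = cam[2:] + [0.0, 0.0]        # radar timestamps lag by two samples
offset = best_time_offset(radar, cam, max_shift=3)   # recovers the lag of 2
```

Because both sensors observe the same platform motion, no calibration target is needed for this alignment; the full method additionally recovers the spatial transform, whose identifiability depends on the motions analyzed in the paper.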